# Quantized Efficient Inference
## MedGemma-4B-it-Q8_0-GGUF
A GGUF-format model converted from google/medgemma-4b-it, designed for image-to-text tasks in the medical field.
- License: Other
- Tags: Image-to-Text, Transformers
- Uploader: NikolayKozloff
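Models in this listing are distributed as single GGUF files on the Hugging Face Hub. Below is a minimal sketch of fetching such a file with the `huggingface_hub` library; the repo id and filename are assumptions inferred from the entry above, so check the actual repository for the exact quantization filename.

```python
# Minimal sketch: download a GGUF file from the Hugging Face Hub.
# Repo id and filename are assumptions based on the listing above;
# verify them against the actual repository before use.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="NikolayKozloff/medgemma-4b-it-Q8_0-GGUF",  # assumed repo id
    filename="medgemma-4b-it-q8_0.gguf",                # assumed filename
)
print(model_path)  # local path to the cached GGUF file
```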
## Granite-3.3-8B-Instruct-Q8_0-GGUF
A GGUF-format conversion of IBM's Granite-3.3-8B instruction-tuned model, suitable for text generation tasks.
- License: Apache-2.0
- Tags: Large Language Model
- Uploader: NikolayKozloff
## Gemma-3-12B-it-Q5_K_M-GGUF
A GGUF-format model converted from google/gemma-3-12b-it, suitable for use with the llama.cpp framework; a brief usage sketch follows this entry.
- Tags: Large Language Model
- Uploader: NikolayKozloff
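Since this entry (like the others here) targets the llama.cpp framework, here is a minimal sketch of loading a local GGUF file with the llama-cpp-python bindings; the file name and the context/offload settings are assumptions to adapt to your hardware.

```python
# Minimal sketch: run a local GGUF model with llama-cpp-python
# (Python bindings for llama.cpp). The model path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./gemma-3-12b-it-q5_k_m.gguf",  # assumed local filename
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```

The Q8_0, Q5_K_M, and Q4_K_M suffixes in these model names indicate the quantization scheme: lower-bit variants trade some accuracy for a smaller memory footprint and faster inference.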
## Mistral-7B-OpenOrca-Q4_K_M-GGUF
A GGUF-format model converted from Open-Orca/Mistral-7B-OpenOrca, suitable for text generation tasks.
- License: Apache-2.0
- Tags: Large Language Model, English
- Uploader: munish0838